    Settling for limited privacy: how much does it help?

    This thesis explores practical and theoretical aspects of several privacy-providing technologies, including tools for anonymous web browsing, verifiable electronic voting schemes, and private information retrieval from databases. State-of-the-art privacy-providing schemes are frequently impractical, either for implementation reasons or for sheer information-theoretic reasons, given the amount of information that needs to be transmitted. We have been researching whether relaxing the requirements on such schemes, in particular settling for privacy that is imperfect but sufficient in real-world situations rather than perfect, may help in producing more practical or more efficient schemes. This thesis presents three results. The first is the introduction of caching as a technique for providing anonymous web browsing, at the cost of sacrificing some functionality provided by anonymizing systems that do not use caching. The second is a coercion-resistant electronic voting scheme with nearly perfect privacy and nearly perfect voter verifiability. The third consists of lower bounds and simple upper bounds on the amount of communication in nearly private information retrieval schemes; our work is the first in-depth exploration of private information retrieval schemes with imperfect privacy.
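
    A simple upper bound of the kind studied in the third result can be illustrated with a toy single-server scheme (a generic sketch for intuition, not a construction from the thesis): instead of downloading all n records, as the trivial perfectly private scheme must, the client fetches a block of k records placed at a random offset around the target index, leaking roughly log2(n/k) bits about the query in exchange for a k/n reduction in communication.

        import secrets

        def query_block(n, k, i):
            """Pick a k-record block containing index i at a random
            offset, so the server learns which block was fetched but
            not which record inside it was wanted."""
            offset = secrets.randbelow(k)           # i's position within the block
            start = max(0, min(i - offset, n - k))  # clamp the block to the database
            return start, start + k

        def retrieve(database, i, k):
            start, end = query_block(len(database), k, i)
            block = database[start:end]             # k records sent instead of n
            return block[i - start]

        # 1024 records, blocks of 64: 16x less communication than the
        # trivial download-everything scheme, at the cost of revealing
        # the 64-record neighborhood of the target (about 4 bits).
        db = [f"record-{j}" for j in range(1024)]
        assert retrieve(db, 700, 64) == "record-700"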

    LZfuzz: a fast compression-based fuzzer for poorly documented protocols

    Real-world infrastructure offers many scenarios in which protocols (and other details) are not released, whether because they are considered too sensitive or for other reasons. This makes it hard to apply fuzzing techniques to test their security and reliability, since their full documentation is available only to their developers, and domain expertise does not necessarily intersect with fuzz-testing expertise (nor with deployment responsibility). State-of-the-art fuzzing techniques, however, work best when protocol specifications are available. Still, operators whose networks include equipment communicating via proprietary protocols should be able to reap the benefits of fuzz-testing them. In particular, administrators should be able to test proprietary protocols in the absence of end-to-end application-level encryption to understand whether they can withstand injection of bad traffic, and thus be able to plan adequate network protection measures. Such protocols can be observed in action prior to fuzzing, and packet captures can be used to learn enough about the structure of the protocol to make fuzzing more efficient. Various machine learning approaches, e.g. bioinformatics methods, have been proposed for learning models of the targeted protocols. The problem with most of these approaches to date is that, although sometimes quite successful, they are computationally heavy and thus hardly practical for network administrators and equipment owners who cannot easily dedicate a compute cluster to such tasks. We propose a simple method that, despite its roughness, allowed us to learn facts useful for fuzzing from protocol traces at much smaller CPU and time costs. Our fuzzing approach proved itself empirically in testing actual proprietary SCADA protocols in an isolated control network test environment, and was also successful in triggering flaws in implementations of several popular commodity Internet protocols. Our fuzzer, LZfuzz (pronounced "lazy-fuzz"), relies on a variant of the Lempel-Ziv compression algorithm to guess boundaries between the structural units of the protocol, and builds on the well-known free software GPF fuzzer.
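
    The boundary-guessing idea at the fuzzer's core can be sketched in a few lines (an illustration only, not the actual LZfuzz implementation, which builds on GPF): an LZ78-style pass splits a captured message into phrases, each the longest previously seen phrase extended by one byte; the seams between phrases become guesses at boundaries between structural units, which are then mutated one at a time.

        import random

        def lz_boundaries(data: bytes):
            """LZ78-style phrase split: each phrase is the longest
            prefix already in the dictionary plus one new byte; the
            cut points between phrases are guessed field boundaries."""
            seen, cuts, i = set(), [], 0
            while i < len(data):
                j = i + 1
                while j < len(data) and data[i:j] in seen:
                    j += 1
                seen.add(data[i:j])
                cuts.append(j)
                i = j
            return cuts

        def mutate_one_unit(data: bytes, cuts):
            """Flip one byte inside a single guessed unit, leaving the
            rest of the message intact so the target's parser still
            sees mostly well-formed input."""
            starts = [0] + cuts[:-1]
            k = random.randrange(len(cuts))
            unit = bytearray(data[starts[k]:cuts[k]])
            unit[random.randrange(len(unit))] ^= 0xFF
            return data[:starts[k]] + bytes(unit) + data[cuts[k]:]

        msg = b"GET /index HTTP/1.0\r\n"   # stands in for captured traffic
        print(mutate_one_unit(msg, lz_boundaries(msg)))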

    Security Applications of Formal Language Theory

    We present an approach to improving the security of complex, composed systems based on formal language theory, and show how this approach leads to advances in input validation, security modeling, attack surface reduction, and, ultimately, software design and programming methodology. We cite examples based on real-world security flaws in common protocols representing different classes of protocol complexity. We also introduce a formalization of an exploit development technique, the parse tree differential attack, made possible by our conception of the role of formal grammars in security. These insights make possible future advances in software auditing techniques applicable to static and dynamic binary analysis, fuzzing, and general reverse-engineering and exploit development. Our work provides a foundation for verifying critical implementation components with considerably less burden to developers than is offered by the current state of the art. It additionally offers a rich basis for further exploration in the areas of offensive analysis and, conversely, automated defense tools and techniques. This report is divided into two parts. In Part I we address the formalisms and their applications; in Part II we discuss the general implications and recommendations for protocol and software design that follow from our formal analysis.
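
    The approach's central prescription, that input be fully recognized by a parser for the most restrictive workable grammar before any application logic acts on it, can be illustrated with a toy recognizer (an invented length-prefixed format, not one taken from the report):

        def parse_record(buf: bytes):
            """Recognizer for a toy TLV-style unit: a 1-byte tag, a
            2-byte big-endian length, then exactly that many payload
            bytes. Input that does not match the grammar is rejected
            before any application logic ever sees it."""
            if len(buf) < 3:
                raise ValueError("truncated header")
            tag, length = buf[0:1], int.from_bytes(buf[1:3], "big")
            body, rest = buf[3:3 + length], buf[3 + length:]
            if len(body) != length:
                raise ValueError("declared length exceeds available bytes")
            return tag + body, rest

        def recognize(stream: bytes):
            """Accept only if the whole stream parses; one malformed
            record invalidates everything (no partial acceptance)."""
            records = []
            while stream:
                record, stream = parse_record(stream)
                records.append(record)
            return records

        recognize(b"\x01\x00\x02hi\x02\x00\x00")   # well-formed: accepted
        try:
            # A length field pointing past the end of the buffer, the
            # classic overread / parse-differential pattern: rejected.
            recognize(b"\x01\xff\xffhi")
        except ValueError as e:
            print("rejected:", e)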

    ELFbac: Using the Loader Format for Intent-Level Semantics and Fine-Grained Protection

    Adversaries get software to do bad things by rewriting memory and changing control flow. Current approaches to protecting against these attacks leave many exposures; for example, OS-level filesystem protection and OS/architecture support of the userspace/kernelspace distinction fail to prevent corrupted userspace code from changing userspace data. In this paper we present a new approach: using the ELF/ABI sections already produced by the standard binary toolchain to define, specify, and enforce fine-grained policy within an application's address space. We experimentally show that enforcement of such policies would stop a large body of current attacks, and we discuss ways existing architecture could be extended to provide such enforcement more efficiently. Our approach is designed to work with existing ELF executables and the GNU build chain, but it can be extended into compiler toolchains to support code annotations that take advantage of ELFbac enforcement, while maintaining full compatibility with the existing ELF ABI.
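
    As a rough illustration of the section-level information such a policy keys on (assuming the third-party pyelftools package; this is not the ELFbac implementation), one can walk a binary's section headers and derive a toy per-section permission table from flags the toolchain already emits:

        # Illustrative sketch: dump the section layout a policy like
        # ELFbac's could key on. Requires the pyelftools package.
        from elftools.elf.constants import SH_FLAGS
        from elftools.elf.elffile import ELFFile

        def section_policy(path):
            """Map each allocated section to a toy (r, w, x) policy
            derived from its ELF flags, e.g. .text: r-x, .data: rw-,
            .rodata: r--. Real enforcement would install these as
            per-section memory protections."""
            with open(path, "rb") as f:
                for sec in ELFFile(f).iter_sections():
                    flags = sec["sh_flags"]
                    if not flags & SH_FLAGS.SHF_ALLOC:
                        continue   # not mapped at run time
                    perms = "r{}{}".format(
                        "w" if flags & SH_FLAGS.SHF_WRITE else "-",
                        "x" if flags & SH_FLAGS.SHF_EXECINSTR else "-",
                    )
                    yield sec.name, hex(sec["sh_addr"]), perms

        for row in section_policy("/bin/ls"):
            print(*row)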

    Design and prototype of a coercion-resistant, voter verifiable electronic voting system

    In elections, it is important that voters be able to verify that the tally reflects the sum of the votes as they were actually cast, and as they were intended to be cast. It is also important that voters not be subject to coercion from adversaries. Most currently proposed voting systems fall short: they either do not provide both properties, or they require the voter to be a computer. In this paper, we present a new voting system that uses voter knowledge to enable voter verification, via a receipt that is uninformative to a coercer without access to the voting machine or the contents of the cast ballots. Our system does not assume any trust in the voting machine, but it requires a few other assumptions that we believe to be reasonable in real-world settings. A basic prototype of this system is available on our website.
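
    A heavily simplified illustration of how a receipt can support verification while telling a coercer nothing (a generic commitment sketch, not the scheme from the paper): the machine salts and hashes the ballot, the voter keeps only the hash, and the salt stays with the cast ballots, so the receipt can be opened at audit time but proves nothing to anyone who lacks the ballot data.

        import hashlib, secrets

        def cast(vote: str):
            """Toy commitment receipt (not the paper's scheme): salt
            and hash the ballot; the voter keeps the hash, while the
            salt remains sealed with the cast ballots."""
            salt = secrets.token_bytes(16)
            receipt = hashlib.sha256(salt + vote.encode()).hexdigest()
            return receipt, (salt, vote)

        def verify(receipt: str, salt: bytes, vote: str) -> bool:
            return hashlib.sha256(salt + vote.encode()).hexdigest() == receipt

        receipt, (salt, ballot) = cast("candidate-A")
        assert verify(receipt, salt, ballot)          # opens at audit time
        assert not verify(receipt, salt, "candidate-B")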

    Using Caching For Browsing Anonymity

    In this paper, we propose a caching proxy system that allows users to retrieve data from the World-Wide Web in a way that provides recipient unobservability by a third party and sender unobservability by the recipient, and thus does away with intersection attacks, and we report on the prototype we built using Google.
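
    A minimal sketch of the mechanism (a toy illustration, not the Google-based prototype): when the proxy answers from a shared cache, no upstream request is generated for a third party to observe, and on a miss the origin server sees only the proxy, never the client.

        import urllib.request

        class CachingProxy:
            """Toy shared cache: a hit generates no upstream traffic
            for a wiretap to correlate with the client, and the origin
            server only ever sees requests from the proxy itself."""
            def __init__(self):
                self.cache = {}

            def fetch(self, url: str) -> bytes:
                if url not in self.cache:   # miss: one request, by the proxy
                    with urllib.request.urlopen(url) as resp:
                        self.cache[url] = resp.read()
                return self.cache[url]      # hit: served with no network I/O

        proxy = CachingProxy()
        proxy.fetch("https://example.com/")   # first client populates the cache
        proxy.fetch("https://example.com/")   # later clients are unobservable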

    Software on the Witness Stand: What Should It Take for Us to Trust It?

    We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss the trustworthiness requirements that we believe should be applied to such software and to the platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them. This paper is structured as follows: Part I is a case study of electronic evidence in a “file sharing” copyright infringement case, the potential trustworthiness issues involved, and the ways we believe they should be addressed with state-of-the-art computing practices. Part II is a legal analysis of the issues and practices surrounding the use of software-generated evidence by courts.